135 research outputs found

    Lossy network correlated data gathering with high-resolution coding

    Sensor networks measuring correlated data are considered, where the task is to gather data from the network nodes to a sink. A specific scenario is addressed in which data at the nodes are lossy coded at high resolution, and the information measured by the nodes has to be reconstructed at the sink within both total and individual distortion bounds. The first problem considered is to find the optimal transmission structure and the rate-distortion allocations at the various spatially located nodes, so as to minimize the total power consumption cost of the network, assuming fixed node positions. The optimal transmission structure is the shortest path tree, and the rate and distortion allocation problems separate in the high-resolution case: first the distortion allocation is found as a function of the transmission structure, and second, for a given distortion allocation, the rate allocation is computed. The second problem addressed is the case when the node positions can be chosen, by finding the optimal node placement for two different targets of interest, namely total power minimization and network lifetime maximization. Finally, a node placement solution that provides a tradeoff between the two metrics is proposed.
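    The shortest-path-tree structure above can be made concrete; a minimal sketch of computing such a tree with plain Dijkstra rooted at the sink (the graph layout and weights are illustrative, with edge weights standing in for per-bit transmission costs):

```python
import heapq

def shortest_path_tree(n, edges, sink):
    """Dijkstra from the sink over an undirected weighted graph.

    Returns (dist, parent): dist[v] is the minimum path weight from v
    to the sink, and parent[v] is v's next hop toward the sink, so
    following parents traces each node's transmission route.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    parent = [None] * n
    dist[sink] = 0.0
    pq = [(0.0, sink)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent
```

In the paper's setting the tree is fixed first, and the distortion and rate allocations are then computed on top of it.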

    Network correlated data gathering with explicit communication: NP-completeness and algorithms

    We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total transmission cost of transporting the information collected by the nodes to the sink node. For source coding of correlated data, we consider a joint entropy-based coding model with explicit communication, where coding is simple but optimizing the transmission structure is difficult. We first formulate the optimization problem in the general case, and then study a network setting where the entropy conditioning at nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case, the optimization problem is NP-hard. We propose efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor of the optimal.
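    As a toy illustration of why the transmission structure matters here, the following sketch evaluates the total cost of a given gathering tree under a simplifying assumption (not the paper's exact model): any node that relays at least one other node's data codes at a lower conditional rate r, while the remaining nodes send the full rate R, and each node's bits pay for every edge on its path to the sink.

```python
def gathering_cost(parent, edge_w, R, r, sink):
    """Total transmission cost of a gathering tree.

    parent[v] is v's next hop toward the sink (None for the sink);
    edge_w maps (child, parent) pairs to edge weights. Nodes with at
    least one child code conditionally at rate r, leaves at rate R.
    """
    n = len(parent)
    has_child = [False] * n
    for v in range(n):
        if v != sink and parent[v] is not None:
            has_child[parent[v]] = True
    total = 0.0
    for v in range(n):
        if v == sink:
            continue
        rate = r if has_child[v] else R
        # walk the path from v to the sink, summing edge weights
        u = v
        while u != sink:
            total += rate * edge_w[(u, parent[u])]
            u = parent[u]
    return total
```

Different trees trade off short paths against how many nodes get to code conditionally, which is the tension the paper's heuristics exploit.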

    Networked Slepian-Wolf: theory, algorithms, and scaling laws

    Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions which are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree; second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.
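    For data gathering, the closed-form allocation corresponds to a corner point of the Slepian-Wolf region: the node nearest the sink codes at its full entropy, and each farther node conditions on all nodes nearer than itself. A minimal sketch, with the conditional entropies supplied by a caller-provided oracle (the oracle and its values are illustrative, not the paper's model):

```python
def slepian_wolf_rates(dists, cond_entropy):
    """Corner-point rate allocation for single-sink gathering.

    dists[i] is node i's path weight to the sink;
    cond_entropy(i, given) returns H(X_i | X_given) for a set `given`.
    Nodes are processed by increasing distance, each conditioning on
    all nearer nodes. Returns {node: rate}.
    """
    order = sorted(range(len(dists)), key=lambda i: dists[i])
    rates, seen = {}, []
    for i in order:
        rates[i] = cond_entropy(i, frozenset(seen))
        seen.append(i)
    return rates
```

Since conditioning only shrinks entropy, this assigns the smallest rates to the nodes with the most expensive paths, which is what minimizes the rate-times-path-weight cost.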

    An Online Multiple Kernel Parallelizable Learning Scheme

    The performance of reproducing kernel Hilbert space-based methods is known to be sensitive to the choice of the reproducing kernel. Choosing an adequate reproducing kernel can be challenging and computationally demanding, especially in data-rich tasks without prior information about the solution domain. In this paper, we propose a learning scheme that scalably combines several single-kernel-based online methods to reduce the kernel-selection bias. The proposed learning scheme applies to any task formulated as a regularized empirical risk minimization convex problem. More specifically, our learning scheme is based on a multi-kernel learning formulation that can be applied to widen any single-kernel solution space, thus increasing the possibility of finding higher-performance solutions. In addition, it is parallelizable, allowing for the distribution of the computational load across different computing units. We show experimentally that the proposed learning scheme outperforms each of the combined single-kernel online methods in terms of the cumulative regularized least squares cost metric.
    Comment: 5 pages, 2 figures
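    A minimal sketch of the idea of running several single-kernel online learners in parallel and combining them; the Gaussian kernels, the kernel-LMS-style updates, and the hedge-style multiplicative weighting are assumptions standing in for the paper's actual formulation:

```python
import math

class OnlineKernelLS:
    """One Gaussian-kernel online least-squares learner: functional
    stochastic gradient descent, each sample becoming a dictionary atom."""
    def __init__(self, gamma, lr=0.2):
        self.gamma, self.lr = gamma, lr
        self.centers, self.coefs = [], []

    def kernel(self, x, z):
        return math.exp(-self.gamma * (x - z) ** 2)

    def predict(self, x):
        return sum(a * self.kernel(x, z)
                   for a, z in zip(self.coefs, self.centers))

    def step(self, x, y):
        err = y - self.predict(x)
        self.centers.append(x)
        self.coefs.append(self.lr * err)

class MultiKernelOnline:
    """Convex combination of single-kernel learners; each learner can
    run on its own computing unit, and the combination weights are
    updated multiplicatively from the per-learner losses."""
    def __init__(self, gammas, eta=0.1):
        self.models = [OnlineKernelLS(g) for g in gammas]
        self.weights = [1.0 / len(gammas)] * len(gammas)
        self.eta = eta

    def predict(self, x):
        return sum(w * m.predict(x)
                   for w, m in zip(self.weights, self.models))

    def step(self, x, y):
        err = y - self.predict(x)  # combined error before the update
        losses = [(y - m.predict(x)) ** 2 for m in self.models]
        for m in self.models:
            m.step(x, y)
        w = [wi * math.exp(-self.eta * li)
             for wi, li in zip(self.weights, losses)]
        s = sum(w)
        self.weights = [wi / s for wi in w]
        return err
```

Because the per-kernel updates are independent, the inner loop over `self.models` is the part that parallelizes across computing units.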

    Quantization in Graph Convolutional Neural Networks


    Reliable Multicast D2D Communication over Multiple Channels in Underlay Cellular Networks

    Author's accepted manuscript. © 2020 IEEE.
    Multicast device-to-device (D2D) communication operating in underlay with cellular networks is a spectrally efficient technique for disseminating data to nearby receivers. However, due to critical challenges such as mitigating mutual interference and the unavailability of perfect channel state information (CSI), the allocation of resources to multicast groups needs significant attention. In this work, we present a framework for a joint channel assignment and power allocation strategy to maximize the sum rate of the combined network. The proposed framework allows multicast groups access to multiple channels, thus improving the achievable rate of the individual groups. Furthermore, fairness in allocating resources to the multicast groups is ensured by augmenting the objective with a penalty function. In addition, considering imperfect CSI, the framework guarantees a rate above a specified outage level for all users. The formulated problem is a mixed-integer nonconvex program, which requires exponential complexity to solve optimally. To tackle this, we first introduce auxiliary variables to decouple the original problem into smaller power allocation problems and a channel assignment problem. Next, with the aid of fractional programming via a quadratic transform, we obtain an efficient power allocation solution by alternating optimization. The solution for channel assignment is obtained by convex relaxation of the integer constraints. Finally, we demonstrate the merit of the proposed approach by simulations, showing higher and more robust network throughput.
    Index Terms: D2D multicast communications, resource allocation, imperfect CSI, fractional programming.
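    The quadratic-transform step can be illustrated on a toy sum-of-SINRs objective (a stand-in for the paper's rate objective with outage and fairness terms): for fixed auxiliary variables y the transformed objective is concave and separable in the powers, so the alternation has closed-form updates. The two-link gains and the power cap below are illustrative.

```python
import math

def sum_sinr_power_alloc(G, sigma, p_max, iters=50):
    """Alternating power optimization via the quadratic transform.

    G[i][j] is the channel gain from transmitter j to receiver i
    (diagonal entries are the direct links); sigma is the noise power.
    Maximizes sum_i SINR_i over 0 <= p_i <= p_max.
    """
    n = len(G)
    p = [p_max] * n

    def sinr(i):
        interf = sigma + sum(G[i][j] * p[j] for j in range(n) if j != i)
        return G[i][i] * p[i] / interf

    for _ in range(iters):
        # auxiliary update: y_i = sqrt(numerator_i) / denominator_i
        y = [math.sqrt(G[i][i] * p[i]) /
             (sigma + sum(G[i][j] * p[j] for j in range(n) if j != i))
             for i in range(n)]
        # power update: the transformed objective is concave and
        # separable, so each p_i has a closed-form maximizer, clipped
        new_p = []
        for i in range(n):
            cross = sum(y[k] ** 2 * G[k][i] for k in range(n) if k != i)
            if cross == 0.0:
                new_p.append(p_max)
            else:
                new_p.append(min(p_max,
                                 (y[i] * math.sqrt(G[i][i]) / cross) ** 2))
        p = new_p
    return p, sum(sinr(i) for i in range(n))
```

The quadratic transform guarantees the original objective is non-decreasing over these alternations, which is what makes the alternating optimization in the paper well behaved.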

    Error-Rate Dependence of Non-Bandlimited Signals with Finite Rate of Innovation

    We consider the rate-distortion problem for non-bandlimited signals that have a finite rate of innovation, in particular the class of continuous periodic streams of Diracs, characterized by a set of time positions and weights. Previous research has considered only the sampling of these signals, ignoring quantization, which is necessary for any practical application (e.g. UWB, CDMA). In order to achieve accuracy under quantization, we introduce two types of oversampling, namely oversampling in frequency and oversampling in time. The reconstruction accuracy is measured by the MSE of the time positions. High accuracy is achieved by enforcing the reconstruction to satisfy constraints from three convex sets, related to 1) the sampling kernel, 2) quantization, and 3) periodic streams of Diracs; satisfying all three provides strong consistency, while satisfying only the first two provides weak consistency. We propose reconstruction algorithms for both weak and strong consistency. Regarding the rate, we also consider a threshold-crossing-based scheme, which is more efficient than PCM encoding. We compare the rate-distortion behavior obtained, on the one hand, from increasing the oversampling in time and in frequency and, on the other hand, from decreasing the quantization stepsize.
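    The reconstruction core for periodic streams of Diracs is the annihilating filter; a minimal noiseless sketch (the quantization, oversampling, and consistency machinery discussed above is omitted, and exact Fourier series coefficients are assumed as input):

```python
import numpy as np

def recover_diracs(x_hat, K, tau):
    """Annihilating-filter (Prony) recovery of K Diracs in a period tau
    from exact Fourier series coefficients x_hat[m], m = -M..M, with
    2M + 1 >= 2K + 1. Returns (positions, weights) sorted by position.
    """
    n = len(x_hat)
    # Toeplitz system: sum_l h[l] * x_hat[m - l] = 0 defines the filter h
    A = np.array([[x_hat[i + K - l] for l in range(K + 1)]
                  for i in range(n - K)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()              # null vector = filter coefficients
    u = np.roots(h)                # roots u_k = exp(-2j*pi*t_k/tau)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # weights from the Vandermonde system x_hat[m] = sum_k a_k u_k^m / tau
    m = np.arange(-(n // 2), n // 2 + 1)
    V = u[None, :] ** m[:, None] / tau
    a = np.linalg.lstsq(V, np.asarray(x_hat), rcond=None)[0].real
    order = np.argsort(t)
    return t[order], a[order]
```

With 2K + 1 clean coefficients the recovery is exact; the point of the oversampling and consistent-reconstruction constraints in the paper is to keep this step accurate once the samples are quantized.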